Market Roundup

March 12, 2004

Defining On Demand: One Step at a Time

HP Announces New Blade and SMB Servers

IBM Announces TotalStorage Productivity Center

Density and Critical Mass

 


Defining On Demand: One Step at a Time

By Jim Balderston

IBM announced this week it has acquired Trigo Technologies, a privately held maker of information management middleware based in Brisbane, California. Trigo’s products are designed to help manage product information across an enterprise and its supply chain, and the company has customers among retail, manufacturing, and distribution entities, IBM said. These customers use Trigo’s products to synchronize and manage inventory assets for a range of enterprise and customer touch points. IBM indicated that the Trigo technology would be integrated into its WebSphere middleware offering focused on supply chain management. IBM also stated in various press reports that the Trigo technology will be integrated into its ongoing radio frequency identification (RFID) efforts, in which small frequency-emitting tags allow for faster identification and tracking of inventory moving in and out of an enterprise. Financial details of the deal were not disclosed.

There has been a lot of talk about “On Demand” or “Adaptive Enterprise” from companies like IBM and HP, and much of that discussion centers on the strategic vision these enterprise IT vendors are offering. For IBM, On Demand computing largely means integrating all IT assets into a single whole, with transparency and manageability of data and assets as key elements of this vision. But with the Trigo acquisition, we see something more intriguing: the seeds of the actual implementation of the coming On Demand world.

Taking Trigo and the RFID technologies together, an enterprise begins to gain real-time data on where every widget, cog, or sprocket in its collective possession actually resides. Much has been made in recent months about the potential of RFID, not only by IBM but also by other IT giants such as Sun and HP. We believe there is good reason for such talk, as RFID promises to be a key piece in changing the granularity of demand and supply cycles. Nonetheless, we do not expect that RFID tags will be deployed universally, at least to start. Initially, we see incremental baby steps, such as using the tags to identify larger units of shipped inventory like cargo containers. As the technology expands its reach, individual pallets, then individual cases, and finally individual widgets will be tagged and monitored. Their exact number, location, and status will be accessible not only to an enterprise but to its customers and partners as well. In other words, disparities between physical objects and data about those objects will shrink significantly, perhaps replacing the commonplace retail phrase “let me see if we have some more in the back” with “yes, we can get you one from the twenty-four in the back.” As this happens, the meaning and full scope of “On Demand” will morph from being data-centric into being asset-centric, which is where, in our view, the real value of on demand environments will be found.
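
To make the “twenty-four in the back” idea concrete, the minimal sketch below shows how tag reads gathered from RFID readers could be rolled up into per-location counts. It is purely illustrative: the data model and function names are our own invention and are not based on Trigo’s products or on IBM’s WebSphere or RFID offerings.

    # Hypothetical sketch only: rolling RFID tag reads up into per-location
    # inventory counts. Names and data are invented for illustration; this is
    # not based on any Trigo or IBM API.
    from collections import defaultdict

    def count_by_location(tag_reads):
        """tag_reads: iterable of (tag_id, sku, location) tuples from readers."""
        seen = defaultdict(set)                 # (location, sku) -> set of tag ids
        for tag_id, sku, location in tag_reads:
            seen[(location, sku)].add(tag_id)   # a tag read twice is still one item
        return {key: len(tags) for key, tags in seen.items()}

    reads = [
        ("TAG-0001", "widget-9", "back room"),
        ("TAG-0002", "widget-9", "back room"),
        ("TAG-0003", "widget-9", "sales floor"),
    ]
    print(count_by_location(reads))
    # {('back room', 'widget-9'): 2, ('sales floor', 'widget-9'): 1}

The point of the exercise is simply that once every item carries a tag, “how many do we have, and where?” becomes a query rather than a trip to the stock room.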

HP Announces New Blade and SMB Servers

By Charles King

HP has announced a pair of new server products: the Proliant BL30p blade server and the Proliant ML110, the first product in the company’s new low-cost ML100 server series. The dual-Xeon BL30p shares existing BLp-class racks and enclosures, but doubles previous maximum server/processor capacity by means of a specially designed sleeve that allows two blades to be plugged into a single slot. As a result, each BLp enclosure can support up to sixteen blades and thirty-two processors, while each rack can support up to six enclosures. HP said that the BL30p is designed for application servers, Web hosting, ecommerce, computational clusters, and grid computing, and is optimized for compute density with little or no local storage. The Proliant ML110 is a single-processor tower server that HP said was designed to meet the file sharing, Web and mail messaging, and general purpose needs of SMB customers. The ML110 is available with either an Intel Celeron or Pentium 4 processor, and offers ATA or SCSI hard drive options. The ML110 comes preloaded with Windows Small Business Server, also supports Linux and NetWare, and is currently available at list prices beginning at $499. The BL30p will support both Windows and Linux and is expected to become available during the second calendar quarter, when pricing and full specifications will be announced.

The IT marketplace offers a constantly shifting landscape of rising and falling vendor fortunes, but the view has been especially gloomy for HP’s enterprise products group since the company’s acquisition of Compaq. While most industry pundits mistakenly characterized the deal simply as a PC-focused adventure worthy of Don Quixote, significant benefits for HP, including market leadership in IA-32-based servers and Linux systems, largely slipped under the radar. Unfortunately for HP, its server and storage businesses have suffered the same pressures from below (Dell) and above (IBM) that have plagued the company’s other product divisions, except for printing and imaging, of course. So what is HP to do? The company’s new servers offer some inkling of HP’s strategic intentions. On the low end, the ML110 offers a price/performance value story with a particularly SMB spin. Is the story believable? Maybe. HP has always had a strong presence among SMBs, but Dell is driving aggressively and successfully into that same market while deriving considerably greater financial benefits. The fact is that no vendor, at least so far, has been able to out-Dell Dell on the low end, including HP. HP may spin the ML110 as an SMB-specific solution, but it also offers simple proof that the same commoditization pressures that have warped the PC market are moving inevitably up the enterprise IT solution chain, a particularly unhappy situation for HP.

The BL30p is a somewhat different kettle of gumbo. IBM has dominated the blade server space with its BladeCenter solution, which set the pace for combining density and overall system integration. While HP was first to market with blades, its Xeon-based BLp solutions offer just over half as many blades/processors per rack/enclosure as BladeCenter. The BL30p’s blade sleeve technology corrects that disparity, but at a notable loss of local storage capacity. This is not particularly surprising: if you try to squeeze two pieces of anything into a space originally designed for one, something has to give. However, this same simple doubling is also likely to impact the BLp’s overall configuration flexibility. Until the BL30p’s actual cost and specifications become available, it is impossible to say just how it will compare with competitors’ offerings. From the look of things, though, HP’s method of doubling blade capacity is likely to come at a significant cost.

IBM Announces TotalStorage Productivity Center

By Charles King

IBM has announced the new TotalStorage Productivity Center, a suite of storage infrastructure management software to centralize, automate, and simplify the management of complex, heterogeneous enterprise storage environments. According to IBM, the new suite is designed to help companies better control the rise in storage hardware consumption while maintaining system uptime, both of which can lower costs across the enterprise. The new software products include tools to centrally manage multiple storage hardware systems, heterogeneous SANs (including error detection/analysis), and storage resources (including performance monitoring, reporting, and real-time alerts). The TotalStorage Productivity Center is part of IBM’s new TotalStorage Open Software Family, which brings together the company’s entire portfolio of storage management, automation, and virtualization solutions, including products from Tivoli and IBM Storage. The TotalStorage Productivity Center will become available in May with configurations starting at $5,000.

Parsing IT industry press releases often resembles the reading of tea leaves, but some make one wish for a stronger brew. IBM’s TotalStorage Productivity Center is a case in point. To begin, the announcement focuses on a number of notable storage management capabilities: centralized automated management, heterogeneous systems and SAN issues, storage virtualization, and infrastructure resource/performance products. In other words, IBM’s announcement offers solutions for some of the key issues faced by, and focused on by, virtually every enterprise storage vendor. The problem is that it in no way explains what these proposed solutions will do, how they will work, or whose products other than IBM’s they will work with. That vagueness also undercuts related IBM storage solutions such as the SAN Volume Controller, SAN Integration Server, and SAN File System. In addition, the announcement deals elliptically (at best) with IBM’s decision to unite all of its storage software products into the new TotalStorage Open Software Family. While some may consider this a simple branding exercise, we believe the move is more significant and should help IBM create a more seamless and unified message around its storage products and strategy.

This being the case, why are such important messages submerged in rhetorical obscurity? We can think of three possible reasons. First, conventional wisdom suggests that any truth told to the market informs your competitors along with your customers. As a result, it is better to offer a confusing (to your rivals, at least) story that you can clarify with the folks you actually do business with. Second, further conventional wisdom posits that altering direction can be regarded, depending entirely on one’s predisposition and point of view, as either a clear-minded victory or an ignominious defeat. In other words, even if a change in course allows one to avoid a nasty fall off a particularly dangerous cliff, it is better to brush off the dust, hide your limp, and act as if nothing happened. Finally, as commoditization pressures alter the enterprise hardware landscape, the tale of enterprise IT is increasingly becoming a story about software and services. While software’s importance resonates throughout IBM’s overall systems strategy, enlivening and enlightening company initiatives such as On Demand and Autonomic Computing, the role storage software plays in IBM’s overall efforts is less well articulated. Simply put, we find the first two reasons for obscuring this announcement perfectly reasonable. However, if the third betrays a larger incoherence in IBM’s storage strategy, the company needs to act quickly to clarify its course.

Density and Critical Mass

By Jim Balderston

Monet Mobile Networks, a high-speed wireless network provider based in Kirkland, Washington, declared Chapter 11 bankruptcy this week and announced that it would shut down its service to 3,000 subscribers in April. The company offered high-speed wireless networks in small towns in the Midwest, including Fargo, Grand Forks, and Bismarck, North Dakota; Duluth and Moorhead, Minnesota; Eau Claire and Superior, Wisconsin; and Sioux Falls, South Dakota. Subscribers paid $40 per month for the service, purchased GTRAN Wireless DotSurfer or Audiovox wireless modems, and paid a start-up fee that varied based on the device used to connect to the network. Monet offered home and business rates, and offered connectivity to Windows-based desktops, laptops, and handhelds. The company raised more than $80 million to build out its networks and indicated that its assets will be liquidated. Monet had planned to slowly expand its service nationwide. Verizon Wireless announced this year that it plans to offer a service based on similar technology on a nationwide basis and that it plans to spend $1 billion doing so.

The simple explanation of Monet’s failure may be that it ran out of money, or that it was simply too far ahead of customer needs to be successful. Either would certainly do, but we suspect that the real reason for the company’s inability to gain critical mass is that such a mass is larger than a company this size could ever attain. In other words, its reach exceeded its grasp.

It would seem obvious that, with its $1 billion investment, such will not be the case with Verizon as it rolls out its latest-generation wireless network service. We do not expect that the company will take the same approach as Monet, which focused on smaller markets with lower population densities. Instead, Verizon will take the path most notably taken by copper wire phone companies, cable providers, and mobile phone operators: start in the densest population areas and move out from there. The logic, of course, is quite simple: the customer return for a given amount of infrastructure expenditure is much higher in those areas; they become the cash cows that subsidize eventual deployment in lower-density locales, much the way a large base of health insurance customers spreads the cost of individual care across a broader base of income. It’s all in the actuarial tables. For this reason, however, we suspect that nationwide wireless service is not going to come from a patchwork of companies; rather, the actuarial realities of providing uninterrupted nationwide wireless bandwidth are going to require further consolidation among wireless companies, which in their present state do not individually possess the critical mass either.
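
A back-of-the-envelope calculation makes the arithmetic plain. The Monet figures below come from the announcement itself (3,000 subscribers at $40 per month, against more than $80 million raised); the dense-market comparison figures are purely hypothetical and included only to show how density changes the payback math.

    # Monet figures from the announcement; the "dense metro" subscriber count is
    # hypothetical, used only to illustrate the density argument in the text.
    def payback_years(buildout_cost, subscribers, monthly_fee=40):
        """Years of subscription revenue needed to recover the build-out cost,
        ignoring operating expenses entirely."""
        return buildout_cost / (subscribers * monthly_fee * 12)

    print(round(payback_years(80_000_000, subscribers=3_000), 1))   # Monet's footprint: ~55.6 years
    print(round(payback_years(80_000_000, subscribers=50_000), 1))  # hypothetical dense metro: ~3.3 years

Even before operating costs, a subscriber base the size of Monet’s could not plausibly repay its build-out; the same spend amortized over a dense metropolitan subscriber base is a very different proposition.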


The Sageza Group, Inc.

32108 Alvarado Blvd #354

Union City, CA 94587

650·390·0700     fax 650·649·2302

London +44 (0) 20·7900·2819

Milan +39 02·9544·1646

 

sageza.com

 

Copyright © 2004 The Sageza Group, Inc. May not be duplicated or retransmitted without written permission.